2023-08-10 11:32:24.AIbase.304
IBM Research: AI Chatbots Easily Deceived into Generating Malicious Code
IBM Research found that large language models such as GPT-4 can be easily deceived into generating malicious code or providing false security recommendations. Researchers discovered that a basic command of English and some background knowledge about a model's training data are enough to trick AI chatbots. Susceptibility to deception varies across AI models, with GPT-3.5 and GPT-4 being more easily tricked.